2,106 research outputs found
Modeling and inference of multisubject fMRI data
Functional magnetic resonance imaging (fMRI) is a
rapidly growing technique for studying the brain in
action. Since its creation [1], [2], cognitive scientists
have been using fMRI to understand how we remember,
manipulate, and act on information in our environment.
Working with magnetic resonance physicists, statisticians, and
engineers, these scientists are pushing the frontiers of knowledge
of how the human brain works.
The design and analysis of single-subject fMRI studies
have been well described. For example, [3], chapters 10
and 11 of [4], and chapters 11 and 14 of [5] all give accessible
overviews of fMRI methods for one subject. In contrast,
while the appropriate manner to analyze a group of
subjects has been the topic of several recent papers, we do
not feel it has been covered well in introductory texts and
review papers. Therefore, in this article, we bring together
old and new work on so-called group modeling of fMRI
data using a consistent notation to make the methods more
accessible and comparable.
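The group modeling described above is often carried out with a two-stage "summary statistics" approach: a per-subject GLM yields one contrast estimate per subject, and a second-level one-sample t-test is run across those estimates. A minimal sketch under simulated data (all names and parameters here are illustrative, not from the article):

```python
# Hypothetical sketch of the two-stage "summary statistics" group model:
# subject-level GLM contrast estimates feed a second-level one-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_timepoints = 12, 100
design = rng.standard_normal(n_timepoints)          # one task regressor
X = np.column_stack([np.ones(n_timepoints), design])

# First level: fit y = X @ beta + noise per subject; keep the task effect.
betas = []
for _ in range(n_subjects):
    y = 0.5 * design + rng.standard_normal(n_timepoints)  # true effect = 0.5
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    betas.append(beta_hat[1])

# Second level: one-sample t-test on the per-subject summary statistics.
t, p = stats.ttest_1samp(betas, popmean=0.0)
print(f"group t = {t:.2f}, p = {p:.4g}")
```

Real analyses additionally model between-subject variance; this sketch omits that to keep the two-stage structure visible.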
Cluster Failure Revisited: Impact of First Level Design and Data Quality on Cluster False Positive Rates
Methodological research rarely generates broad interest, yet our work on
the validity of cluster inference methods for functional magnetic resonance
imaging (fMRI) created intense discussion on both the minutiae of our approach
and its implications for the discipline. In the present work, we take on
various critiques of that study and further explore its limitations. We
address concerns about the particular event-related designs we
used, considering multiple event types and randomisation of events between
subjects. We consider the lack of validity found with one-sample permutation
(sign flipping) tests, investigating a number of approaches to improve the
false positive control of this widely used procedure. We found that the
combination of a two-sided test and cleaning the data using ICA FIX resulted in
nominal false positive rates for all datasets, meaning that data cleaning is
not only important for resting state fMRI, but also for task fMRI. Finally, we
discuss the implications of our work on the fMRI literature as a whole,
estimating that at least 10% of fMRI studies have used the most problematic
cluster inference method (P = 0.01 cluster-defining threshold), and discussing how
individual studies can be interpreted in light of our findings. These
additional results underscore our original conclusions on the importance of
data sharing and thorough evaluation of statistical methods on realistic null
data.
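The evaluation strategy behind this line of work, repeatedly analyzing null data and counting how often anything survives the threshold, can be sketched in miniature. This toy version uses independent Gaussian noise and a Bonferroni threshold rather than fMRI data and cluster inference; it only illustrates how an empirical familywise false positive rate is estimated:

```python
# Toy sketch: analyze many pure-noise datasets and record how often at least
# one test survives correction. The fraction estimates the familywise false
# positive rate, which should sit near the nominal 5% level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_analyses, n_subjects, n_tests = 1000, 20, 50

fp = 0
for _ in range(n_analyses):
    data = rng.standard_normal((n_subjects, n_tests))   # null data: no effect
    _, p = stats.ttest_1samp(data, popmean=0.0, axis=0)
    if (p < 0.05 / n_tests).any():                      # Bonferroni threshold
        fp += 1

fwe = fp / n_analyses
print(f"empirical familywise false positive rate: {fwe:.3f}")
```

With valid inference the printed rate hovers around 0.05; the paper's point is that real cluster inference on real noise can deviate far from this.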
Reply to Chen et al.: Parametric methods for cluster inference perform worse for two-sided t-tests
One-sided t-tests are commonly used in the neuroimaging field, but two-sided
tests should be the default unless a researcher has a strong reason for using a
one-sided test. Here we extend our previous work on cluster false positive
rates, which used one-sided tests, to two-sided tests. Briefly, we found that
parametric methods perform worse for two-sided t-tests, and that non-parametric
methods perform equally well for one-sided and two-sided tests.
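The mechanical relationship between the two test types is easy to see in a minimal example (illustrative only, not the paper's analysis): the t statistic is identical, while the two-sided p-value doubles the smaller tail probability, so two-sided thresholds are stricter.

```python
# One- vs two-sided one-sample t-test on the same simulated sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.3, scale=1.0, size=30)   # simulated effect of 0.3

t_two, p_two = stats.ttest_1samp(sample, popmean=0.0)
t_one, p_one = stats.ttest_1samp(sample, popmean=0.0, alternative="greater")

print(f"t = {t_two:.2f}")
print(f"one-sided p = {p_one:.4f}, two-sided p = {p_two:.4f}")
```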
Permutation Inference for Canonical Correlation Analysis
Canonical correlation analysis (CCA) has become a key tool for population
neuroimaging, allowing investigation of associations between many imaging and
non-imaging measurements. As other variables are often a source of variability
not of direct interest, previous work has used CCA on residuals from a model
that removes these effects, then proceeded directly to permutation inference.
We show that such a simple permutation test leads to inflated error rates. The
reason is that residualisation introduces dependencies among the observations
that violate the exchangeability assumption. Even in the absence of nuisance
variables, however, a simple permutation test for CCA also leads to excess
error rates for all canonical correlations other than the first. The reason is
that a simple permutation scheme fails to remove the variability already
explained by previous canonical variables. Here we propose solutions for both
problems: in the case of nuisance variables, we show that transforming the
residuals to a lower dimensional basis where exchangeability holds results in a
valid permutation test; for more general cases, with or without nuisance
variables, we propose estimating the canonical correlations in a stepwise
manner, removing at each iteration the variance already explained, while
dealing with different numbers of variables on the two sides. We also discuss how
to address the multiplicity of tests, proposing an admissible test that is not
conservative, and provide a complete algorithm for permutation inference for
CCA.
Comment: 49 pages, 2 figures, 10 tables, 3 algorithms, 119 references
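As a baseline, the simple permutation test the abstract analyses can be sketched as follows: permute the rows of one block, re-estimate the first canonical correlation, and count exceedances. This is illustrative code, not the paper's proposed algorithm, and, as the abstract notes, such a scheme is only valid for the first canonical correlation without nuisance variables:

```python
# Naive permutation test for the first canonical correlation.
import numpy as np

def first_cc(X, Y):
    """First canonical correlation via SVD of the whitened cross-product."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

rng = np.random.default_rng(2)
n = 100
X = rng.standard_normal((n, 3))   # simulated null data: no X-Y association
Y = rng.standard_normal((n, 2))

r_obs = first_cc(X, Y)
n_perm = 500
# Permute rows of X only, keeping Y fixed, and count exceedances.
exceed = sum(first_cc(X[rng.permutation(n)], Y) >= r_obs for _ in range(n_perm))
p = (exceed + 1) / (n_perm + 1)   # add-one correction keeps p > 0
print(f"r1 = {r_obs:.3f}, p = {p:.3f}")
```

For later canonical correlations, the paper's stepwise scheme removes the variance explained by earlier canonical variables before each permutation, which this sketch does not attempt.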
Increasing power for voxel-wise genome-wide association studies: the random field theory, least square kernel machines and fast permutation procedures
Imaging traits are thought to have more direct links to genetic variation than diagnostic measures based on cognitive or clinical assessments, and provide a powerful substrate to examine the influence of genetics on human brains. Although imaging genetics has attracted growing attention and interest, most brain-wide genome-wide association studies focus on voxel-wise single-locus approaches, without taking advantage of the spatial information in images or combining the effects of multiple genetic variants. In this paper we present a fast implementation of voxel- and cluster-wise inferences based on random field theory to make full use of the spatial information in images. The approach is combined with a multi-locus model based on least square kernel machines to associate the joint effect of several single nucleotide polymorphisms (SNPs) with imaging traits. A fast permutation procedure is also proposed which significantly reduces the number of permutations needed relative to the standard empirical method and provides accurate small p-value estimates based on parametric tail approximation. We explored the relation between 448,294 single nucleotide polymorphisms and 18,043 genes in 31,662 voxels of the entire brain across 740 elderly subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Structural MRI scans were analyzed using tensor-based morphometry (TBM) to compute 3D maps of regional brain volume differences relative to an average template image based on healthy elderly subjects. We found the method to be more sensitive than voxel-wise single-locus approaches. A number of genes were identified as having significant associations with volumetric changes. The most associated gene was GRIN2B, which encodes the N-methyl-d-aspartate (NMDA) glutamate receptor NR2B subunit and affects both the parietal and temporal lobes in human brains. Its role in Alzheimer's disease has been widely acknowledged and studied, suggesting the validity of the approach.
The various advantages over existing approaches indicate the great potential of this novel framework for detecting genetic influences on human brains.
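The parametric tail approximation mentioned above addresses a hard limit of empirical permutation p-values: with m permutations the smallest reportable p-value is 1/(m+1), far above genome-wide significance levels. A hedged sketch of the idea (not the paper's implementation) fits a generalized Pareto distribution to the upper tail of the permutation distribution and reads small p-values off the fit:

```python
# Tail approximation sketch: fit a GPD to permutation-tail exceedances and
# extrapolate p-values smaller than 1/(n_perm + 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
perm_stats = rng.standard_normal(1000)      # stand-in for permuted statistics

threshold = np.quantile(perm_stats, 0.90)   # keep the top 10% as the tail
excess = perm_stats[perm_stats > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (location at 0).
shape, _, scale = stats.genpareto.fit(excess, floc=0)

t_obs = 3.5                                 # an "observed" extreme statistic
# P(T > t_obs) = P(T > threshold) * P(excess > t_obs - threshold)
p_tail = 0.10 * stats.genpareto.sf(t_obs - threshold, shape, loc=0, scale=scale)
p_empirical = (np.sum(perm_stats >= t_obs) + 1) / (len(perm_stats) + 1)
print(f"empirical p = {p_empirical:.4f}, tail-approximated p = {p_tail:.2e}")
```

The empirical estimate is floored at roughly 1e-3 here, while the fitted tail can report much smaller values, which is the resolution gain the paper exploits.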
Reply to Brown and Behrmann, Cox, et al., and Kessler et al.: Data and code sharing is the way forward for fMRI
We are glad that our paper (1) has generated intense discussion in the fMRI field (2–4) on how to analyze fMRI data and how to correct for multiple comparisons. The goal of the paper was not to disparage any specific fMRI software, but to point out that parametric statistical methods are based on a number of assumptions that are not always valid for fMRI data, and that nonparametric statistical methods (5) are a good alternative. Through AFNI's introduction of nonparametric statistics in the function 3dttest++ (3, 6), the three most common fMRI software packages now all support nonparametric group inference [SPM through the toolbox SnPM (www2.warwick.ac.uk/fac/sci/statistics/staff/academic-research/nichols/software/snpm), and FSL through the function randomise].
A defense of using resting state fMRI as null data for estimating false positive rates
A recent Editorial by Slotnick (2017) reconsiders the findings of our paper on the accuracy of false positive rate control with cluster inference in fMRI (Eklund et al., 2016), in particular criticising our use of resting state fMRI data as a source of null data in the evaluation of task fMRI methods. We defend this use of resting fMRI data: while there is much structure in this data, we argue that it is representative of the noise in task data, and as such analysis software should be able to accommodate it. We also discuss a potential problem with Slotnick's own method.
Dynamic filtering of static dipoles in magnetoencephalography
We consider the problem of estimating neural activity from measurements
of the magnetic fields recorded by magnetoencephalography. We exploit
the temporal structure of the problem and model the neural current as a
collection of evolving current dipoles, which appear and disappear, but whose
locations are constant throughout their lifetime. This fully reflects the physiological
interpretation of the model.
In order to conduct inference under this proposed model, it was necessary
to develop an algorithm based around state-of-the-art sequential Monte
Carlo methods employing carefully designed importance distributions. Previous
work employed a bootstrap filter and an artificial dynamic structure
where dipoles performed a random walk in space, yielding nonphysical artefacts
in the reconstructions; such artefacts are not observed when using the
proposed model. The algorithm was validated on simulated data, where
it provided an average localisation error approximately half that of
the bootstrap filter. An application to complex real data derived from a somatosensory
experiment is presented. Assessment of model fit via marginal
likelihood showed a clear preference for the proposed model and the associated
reconstructions show better localisation.
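The bootstrap filter used as the comparison baseline above can be sketched in a toy form. This is a deliberately simplified 1D random-walk example with invented parameters; the paper's algorithm instead uses carefully designed importance distributions and a dipole birth/death model, neither of which appears here:

```python
# Minimal bootstrap particle filter on a 1D random walk observed in noise.
import numpy as np

rng = np.random.default_rng(4)

T, n_particles = 50, 500
true_x = np.cumsum(rng.normal(0, 0.1, T))      # hidden random-walk state
obs = true_x + rng.normal(0, 0.5, T)           # noisy observations

particles = rng.normal(0, 1, n_particles)
estimates = []
for y in obs:
    # Propagate with the transition density (the "bootstrap" proposal).
    particles = particles + rng.normal(0, 0.1, n_particles)
    # Weight by the Gaussian observation likelihood, then normalise.
    w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # Multinomial resampling to avoid weight degeneracy.
    particles = rng.choice(particles, size=n_particles, p=w)

rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
print(f"filter RMSE = {rmse:.3f}")
```

The filter's error falls well below the observation noise level; the paper's contribution is replacing the bootstrap proposal with model-informed importance distributions so that dipole locations stay fixed over each dipole's lifetime.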